58 research outputs found

    Higher-Order Termination: from Kruskal to Computability

    Termination is a major question in both logic and computer science. In logic, termination is at the heart of proof theory, where it is usually called strong normalization (of cut elimination). In computer science, termination has always been an important issue for showing programs correct. In the early days of logic, strong normalization was usually shown by assigning ordinals to expressions in such a way that eliminating a cut would yield an expression with a smaller ordinal. In the early days of verification, computer scientists used similar ideas, interpreting the arguments of a program call by a natural number, such as their size. Showing that the size of the arguments decreases at each recursive call gives a termination proof of the program; this technique is however rather weak, since it can only yield quite small ordinals. In the sixties, Tait invented a new method for showing cut elimination of natural deduction, based on a predicate over the set of terms such that membership of an expression in the predicate implies the strong normalization property for that expression. Since the predicate is defined by induction on types, or even as a fixpoint, this method can yield much larger ordinals. Later generalized by Girard under the name of reducibility or computability candidates, it proved very effective in proving the strong normalization property of typed lambda-calculi.
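
    As a rough illustration of the computability method sketched above (a standard textbook formulation, not necessarily the exact definitions used in the paper), the predicate is defined by induction on simple types:

        Comp_{\iota} = \{ t \mid t \text{ is strongly normalizing} \}            (\iota a base type)
        Comp_{A \to B} = \{ t \mid \forall u \in Comp_A, \; t\,u \in Comp_B \}

    Strong normalization of every well-typed term then follows from two standard lemmas: every computable term is strongly normalizing, and every well-typed term is computable.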

    Metallicity determination in gas-rich galaxies with semiempirical methods

    A study of the precision of the semiempirical methods used in the determination of the chemical abundances in gas-rich galaxies is carried out. In order to do this, the oxygen abundances of a total of 438 galaxies were determined using the electron-temperature, R_{23}, and P methods. The new calibration of the P method gives the smallest dispersion for the low and high metallicity regions, while the best numbers in the turnaround region are given by the R_{23} method. We also found that the dispersion correlates with the metallicity. Finally, it can be said that all the semiempirical methods studied here are quite insensitive to metallicity, with a value of 8.0 ± 0.2 dex for more than 50% of the total sample. \keywords{ISM: abundances; (ISM): H {\sc ii} regions} Comment: 26 pages, 9 figures and 2 tables. To appear in AJ, January 200
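
    For context, the R_{23} index used above is standardly defined (an assumed textbook definition, not stated in the abstract) as the ratio of the strong optical oxygen lines to H\beta:

        R_{23} = \frac{[\mathrm{O\,II}]\,\lambda 3727 + [\mathrm{O\,III}]\,\lambda\lambda 4959,5007}{\mathrm{H}\beta}

    Semiempirical calibrations such as the P method then map this ratio, together with an excitation parameter, to an oxygen abundance expressed as 12 + log(O/H).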

    A Machine Checked Model of Idempotent MGU Axioms For Lists of Equational Constraints

    We present formalized proofs verifying that the first-order unification algorithm defined over lists of satisfiable constraints generates a most general unifier (MGU), which also happens to be idempotent. All of our proofs have been formalized in the Coq theorem prover. Our proofs show that finite maps produced by the unification algorithm provide a model of the axioms characterizing idempotent MGUs of lists of constraints. The axioms that serve as the basis for our verification are derived from a standard set by extending them to lists of constraints. For us, constraints are equalities between terms in the language of simple types. Substitutions are formally modeled as finite maps using the Coq library Coq.FSets.FMapInterface. Coq's method of functional induction is the main proof technique used in proving many of the axioms. Comment: In Proceedings UNIF 2010, arXiv:1012.455
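
    As an informal sketch of the kind of algorithm being verified (plain Haskell under assumed data types and names, not the paper's Coq development), first-order unification over simple-type-like terms can be written so that the substitution it returns is an idempotent most general unifier:

        -- Minimal sketch: terms of a simple-type-like language and substitutions
        -- as finite maps (Data.Map, loosely analogous to the Coq finite maps
        -- mentioned above). All names here are illustrative.
        import qualified Data.Map as M

        data Term = Var String | Base String | Arrow Term Term
          deriving (Eq, Show)

        type Subst = M.Map String Term

        apply :: Subst -> Term -> Term
        apply s t@(Var x)   = M.findWithDefault t x s
        apply _ t@(Base _)  = t
        apply s (Arrow a b) = Arrow (apply s a) (apply s b)

        occurs :: String -> Term -> Bool
        occurs x (Var y)     = x == y
        occurs _ (Base _)    = False
        occurs x (Arrow a b) = occurs x a || occurs x b

        -- compose s2 s1 behaves like applying s1 first and then s2.
        compose :: Subst -> Subst -> Subst
        compose s2 s1 = M.union (M.map (apply s2) s1) s2

        -- Solve a list of equational constraints; Nothing means unsatisfiable.
        -- Each bound variable passes the occurs check and is eliminated from the
        -- remaining constraints, which is what makes the result idempotent.
        unify :: [(Term, Term)] -> Maybe Subst
        unify [] = Just M.empty
        unify ((s, t) : cs)
          | s == t = unify cs
        unify ((Var x, t) : cs)
          | not (occurs x t) = bind x t cs
        unify ((s, Var x) : cs)
          | not (occurs x s) = bind x s cs
        unify ((Arrow a b, Arrow c d) : cs) = unify ((a, c) : (b, d) : cs)
        unify _ = Nothing

        bind :: String -> Term -> [(Term, Term)] -> Maybe Subst
        bind x u cs = do
          let s1 = M.singleton x u
          s2 <- unify [ (apply s1 l, apply s1 r) | (l, r) <- cs ]
          pure (compose s2 s1)

    The formal development instead proves, for such an algorithm, that the finite maps it produces satisfy the idempotent-MGU axioms.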

    Certification of nontermination proofs using strategies and nonlooping derivations

    © 2014 Springer International Publishing Switzerland. The development of sophisticated termination criteria for term rewrite systems has led to powerful and complex tools that produce (non)termination proofs automatically. While many techniques to establish termination have already been formalized, thereby making it possible to certify such proofs, this is not the case for nontermination. In particular, the proof checker CeTA has so far been limited to (innermost) loops. In this paper we present an Isabelle/HOL formalization of an extended repertoire of nontermination techniques. First, we formalized techniques for nonlooping nontermination. Second, the available strategies include (an extended version of) forbidden patterns, which cover in particular outermost and context-sensitive rewriting. Finally, a mechanism to support partial nontermination proofs further extends the applicability of our proof checker.
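
    For orientation, with standard terminology assumed here rather than taken from the paper: a rewrite system admits a loop when some term rewrites in one or more steps to an instance of itself inside a context,

        t \to^+ C[t\mu]

    for some context C and substitution \mu. For example, the single rule f(x) \to f(s(x)) loops, since f(x) rewrites to f(x)\mu with \mu = \{ x \mapsto s(x) \} and an empty context. Nonlooping nonterminating systems admit an infinite derivation that never exhibits such a self-embedding pattern, which is why certifying their nontermination proofs requires additional techniques like those formalized in the paper.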

    A Framework for Certified Self-Stabilization

    We propose a general framework to build certified proofs of distributed self-stabilizing algorithms with the proof assistant Coq. We first define in Coq the locally shared memory model with composite atomicity, the most commonly used model in the self-stabilizing area. We then validate our framework by certifying a nontrivial part of an existing silent self-stabilizing algorithm which builds a k-hop dominating set of the network. We also certified a quantitative property related to the output of this algorithm. Precisely, we show that the computed k-hop dominating set contains at most \lfloor \frac{n-1}{k+1} \rfloor + 1 nodes, where n is the number of nodes in the network. To obtain these results, we also developed a library which contains general tools related to potential functions and cardinality of sets.
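
    As a quick worked instance of that bound (an illustration, not an example from the paper): for a network of n = 13 nodes and k = 2,

        \lfloor \frac{n-1}{k+1} \rfloor + 1 = \lfloor \frac{12}{3} \rfloor + 1 = 5,

    so the certified property guarantees that the computed 2-hop dominating set contains at most 5 nodes.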

    A static higher-order dependency pair framework

    We revisit the static dependency pair method for proving termination of higher-order term rewriting and extend it in a number of ways: (1) We introduce a new rewrite formalism designed for general applicability in termination proving of higher-order rewriting, Algebraic Functional Systems with Meta-variables. (2) We provide a syntactically checkable soundness criterion to make the method applicable to a large class of rewrite systems. (3) We propose a modular dependency pair framework for this higher-order setting. (4) We introduce a fine-grained notion of formative and computable chains to render the framework more powerful. (5) We formulate several existing and new termination proving techniques in the form of processors within our framework. The framework has been implemented in the (fully automatic) higher-order termination tool WANDA.
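
    To illustrate the dependency pair idea that the framework lifts to the higher-order setting (a classic first-order example, not one from the paper): for the rewrite rules

        minus(x, 0) \to x
        minus(s(x), s(y)) \to minus(x, y)

    the only dependency pair is minus^{\#}(s(x), s(y)) \to minus^{\#}(x, y), and termination follows because no infinite chain of such pairs exists, the first argument strictly decreasing at every step. The processors of a dependency pair framework organize exactly this kind of reasoning as successive simplifications of dependency pair problems.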

    Automatically proving termination and memory safety for programs with pointer arithmetic

    While automated verification of imperative programs has been studied intensively, proving termination of programs with explicit pointer arithmetic fully automatically was still an open problem. To close this gap, we introduce a novel abstract domain that can track allocated memory in detail. We use it to automatically construct a symbolic execution graph that over-approximates all possible runs of a program and that can be used to prove memory safety. This graph is then transformed into an integer transition system, whose termination can be proved by standard techniques. We implemented this approach in the automated termination prover AProVE and demonstrate its capability of analyzing C programs with pointer arithmetic that existing tools cannot handle.
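
    As a rough illustration of this pipeline (a hypothetical example, not AProVE's actual output): a C loop that advances a pointer p through an allocated block of n bytes, such as while (p < end) p++;, can be abstracted to an integer transition system over the offset i = p - start:

        (i, n) \to (i + 1, n) \quad \text{if } i < n

    Termination then follows from the standard ranking-function argument that n - i is nonnegative and strictly decreases at every transition, while the symbolic execution graph separately records the allocation bounds needed to establish memory safety.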

    Analyzing program termination and complexity automatically with AProVE

    In this system description, we present the tool AProVE for automatic termination and complexity proofs of Java, C, Haskell, Prolog, and rewrite systems. In addition to classical term rewrite systems (TRSs), AProVE also supports rewrite systems containing built-in integers (int-TRSs). To analyze programs in high-level languages, AProVE automatically converts them to (int-)TRSs. Then, a wide range of techniques is employed to prove termination and to infer complexity bounds for the resulting rewrite systems. The generated proofs can be exported to check their correctness using automatic certifiers. To use AProVE in software construction, we present a corresponding plug-in for the popular Eclipse software development environment.
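
    As a small illustration of the int-TRS format mentioned above (a hypothetical rule, not output produced by AProVE here): a loop that decrements an integer x while it is positive can be represented by a rewrite rule with a built-in integer constraint,

        eval(x) \to eval(x - 1) \; [x > 0]

    whose termination is immediate because x strictly decreases and is bounded from below on every applicable step; a complexity analysis would additionally bound the number of rule applications linearly in the initial value of x.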

    Concentration or representation: the struggle for popular sovereignty

    There is a tension in the notion of popular sovereignty, and the notion of democracy associated with it, that is both older than our terms for these notions themselves and more fundamental than the apparently consensual way we tend to use them today. After a review of the competing conceptions of 'the people' that underlie two very different understandings of democracy, this article will defend what might be called a 'neo-Jacobin' commitment to popular sovereignty, understood as the formulation and imposition of a shared political will. A people's egalitarian capacity to concentrate both its collective intelligence and force, from this perspective, takes priority over concerns about how best to represent the full variety of positions and interests that differentiate and divide a community.